APR-Transformer: Initial Pose Estimation for Localization in Complex Environments through Absolute Pose Regression

Ravuri, Srinivas, Xu, Yuan, Zehetner, Martin Ludwig, Motlag, Ketan, Albayrak, Sahin

arXiv.org Artificial Intelligence

Afterwards, we remove the last propagation layer and the classification head and use the remaining components as the backbone for our APR-Transformer. We use the output of the last remaining propagation layer as feature vectors F_x and F_q at a resolution of (N, 128, 1024), with each of the 128 vectors corresponding to a reduced point of the original 4096-point input. The Transformer-compatible input embeddings and associated learned encodings, which preserve the spatial information of the backbone outputs, are then computed by first separating the 128 vectors into eight groups based on the absolute z-coordinates (i.e., heights) of their corresponding reduced points. The 16 feature vectors per group are then sorted into a 4x4 grid based on the x and y coordinates of their reduced points. Finally, we adapt the procedure used in the image-based APR-Transformer case by computing the learned positional encodings along the three axes to generate the final Transformer inputs.

C. Pose Regression and Loss Function

L_p(x) = \sum_{i=1}^{D} |x_i^p - x_i^t|    (1)

L_o(q) = \sum_{i=1}^{D} |q_i^p - q_i^t|    (2)

L_pose = L_p \exp(-s_x) + s_x + L_o \exp(-s_q) + s_q    (3)

We train the model variants to minimize the position loss L_p (Equation 1) and the orientation loss L_o (Equation 2) with respect to the ground-truth pose, where L_p and L_o are L1 losses. We combine the position and orientation losses using the formulation by Kendall et al. [28] shown in Equation 3, where s_x and s_q are learned parameters that control the balance between the position loss and the orientation loss.
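The combined loss of Equations 1-3 can be sketched in plain Python (function and variable names here are illustrative, not taken from the paper's implementation; in training, s_x and s_q would be trainable parameters):

```python
import math

def l1_loss(pred, target):
    # Sum of absolute per-component errors, as in Equations 1 and 2.
    return sum(abs(p - t) for p, t in zip(pred, target))

def combined_pose_loss(x_pred, x_true, q_pred, q_true, s_x, s_q):
    # Kendall et al.'s learned weighting (Equation 3): s_x and s_q
    # balance the position and orientation terms.
    L_p = l1_loss(x_pred, x_true)
    L_o = l1_loss(q_pred, q_true)
    return L_p * math.exp(-s_x) + s_x + L_o * math.exp(-s_q) + s_q
```

With s_x = s_q = 0 the formulation reduces to a plain sum of the two L1 losses; during training the learned parameters shift the balance automatically.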


Unleashing the Power of Data Synthesis in Visual Localization

Li, Sihang, Tan, Siqi, Chang, Bowen, Zhang, Jing, Feng, Chen, Li, Yiming

arXiv.org Artificial Intelligence

Visual localization, which estimates a camera's pose within a known scene, is a long-standing challenge in vision and robotics. Recent end-to-end methods that directly regress camera poses from query images have gained attention for fast inference. However, existing methods often struggle to generalize to unseen views. In this work, we aim to unleash the power of data synthesis to promote the generalizability of pose regression. Specifically, we lift real 2D images into 3D Gaussian Splats with varying appearance and deblurring abilities, which are then used as a data engine to synthesize more posed images. To fully leverage the synthetic data, we build a two-branch joint training pipeline, with an adversarial discriminator to bridge the syn-to-real gap. Experiments on established benchmarks show that our method outperforms state-of-the-art end-to-end approaches, reducing translation and rotation errors by 50% and 21.6% on indoor datasets, and 35.56% and 38.7% on outdoor datasets. We also validate the effectiveness of our method in dynamic driving scenarios under varying weather conditions. Notably, as data synthesis scales up, our method exhibits a growing ability to interpolate and extrapolate training data for localizing unseen views. Project Page: https://ai4ce.github.io/RAP/


LoGS: Visual Localization via Gaussian Splatting with Fewer Training Images

Cheng, Yuzhou, Jiao, Jianhao, Wang, Yue, Kanoulas, Dimitrios

arXiv.org Artificial Intelligence

Visual localization involves estimating a query image's 6-DoF (degrees of freedom) camera pose, which is a fundamental component in various computer vision and robotic tasks. This paper presents LoGS, a vision-based localization pipeline utilizing the 3D Gaussian Splatting (GS) technique as scene representation. This novel representation allows high-quality novel view synthesis. During the mapping phase, structure-from-motion (SfM) is applied first, followed by the generation of a GS map. During localization, the initial position is obtained through image retrieval, local feature matching coupled with a PnP solver, and then a high-precision pose is achieved through the analysis-by-synthesis manner on the GS map. Experimental results on four large-scale datasets demonstrate the proposed approach's SoTA accuracy in estimating camera poses and robustness under challenging few-shot conditions.


GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization

Sidorov, Gennady, Mohrat, Malik, Lebedeva, Ksenia, Rakhimov, Ruslan, Kolyubin, Sergey

arXiv.org Artificial Intelligence

Although various visual localization approaches exist, such as scene coordinate and pose regression, these methods often struggle with high memory consumption or extensive optimization requirements. To address these challenges, we utilize recent advancements in novel view synthesis, particularly 3D Gaussian Splatting (3DGS), to enhance localization. 3DGS allows for the compact encoding of both 3D geometry and scene appearance with its spatial features. Our method leverages the dense description maps produced by XFeat's lightweight keypoint detection and description model. We propose distilling these dense keypoint descriptors into 3DGS to improve the model's spatial understanding, leading to more accurate camera pose predictions through 2D-3D correspondences. After estimating an initial pose, we refine it using a photometric warping loss. Benchmarking on popular indoor and outdoor datasets shows that our approach surpasses state-of-the-art Neural Render Pose (NRP) methods, including NeRFMatch and PNeRFLoc.
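The 2D-3D correspondence step can be sketched as nearest-neighbour matching between query keypoint descriptors and descriptors distilled into the Gaussians (a minimal sketch with hypothetical toy data; a real pipeline would use an efficient index and feed the matches to a PnP solver):

```python
def match_2d_3d(query_descs, gaussian_descs):
    # Match each 2D keypoint descriptor to its nearest 3D Gaussian
    # descriptor by squared L2 distance; returns (query, gaussian)
    # index pairs forming 2D-3D correspondences.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qi, qd in enumerate(query_descs):
        gi = min(range(len(gaussian_descs)),
                 key=lambda j: sq_dist(qd, gaussian_descs[j]))
        matches.append((qi, gi))
    return matches
```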


FaVoR: Features via Voxel Rendering for Camera Relocalization

Polizzi, Vincenzo, Cannici, Marco, Scaramuzza, Davide, Kelly, Jonathan

arXiv.org Artificial Intelligence

Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image. Among these, sparse feature matching stands out as an efficient, versatile, and generally lightweight approach with numerous applications. However, feature-based methods often struggle with significant viewpoint and appearance changes, leading to matching failures and inaccurate pose estimates. To overcome this limitation, we propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features. By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking. Given an initial pose estimate, we first synthesize descriptors from the voxels using volumetric rendering and then perform feature matching to estimate the camera pose. This methodology enables the generation of descriptors for unseen views, enhancing robustness to view changes. We extensively evaluate our method on the 7-Scenes and Cambridge Landmarks datasets. Our results show that our method significantly outperforms existing state-of-the-art feature representation techniques in indoor environments, achieving up to a 39% improvement in median translation error. Additionally, our approach yields comparable results to other methods for outdoor scenarios while maintaining lower memory and computational costs.


PoseINN: Realtime Visual-based Pose Regression and Localization with Invertible Neural Networks

Zang, Zirui, Amine, Ahmad, Mangharam, Rahul

arXiv.org Artificial Intelligence

Estimating ego-pose from cameras is an important problem in robotics, with applications ranging from mobile robotics to augmented reality. While SOTA models are becoming increasingly accurate, they can still be unwieldy due to high computational costs. In this paper, we propose to solve the problem by using invertible neural networks (INN) to find the mapping between the latent space of images and poses for a given scene. Our model achieves similar performance to the SOTA while being faster to train and only requiring offline rendering of low-resolution synthetic data. By using normalizing flows, the proposed method also provides uncertainty estimation for the output. We also demonstrate the efficiency of this method by deploying the model on a mobile robot.
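The key property INNs rely on is exact invertibility, commonly obtained with affine coupling layers as in normalizing flows. A toy coupling layer (illustrative only, not PoseINN's actual architecture; `scale_net` and `shift_net` stand in for small learned networks):

```python
import math

def coupling_forward(x1, x2, scale_net, shift_net):
    # Affine coupling: x1 passes through unchanged and parameterises
    # an affine transform of x2, so the map is invertible by design.
    return x1, x2 * math.exp(scale_net(x1)) + shift_net(x1)

def coupling_inverse(y1, y2, scale_net, shift_net):
    # Exact inverse: recover x2 by undoing the shift and scale.
    return y1, (y2 - shift_net(y1)) * math.exp(-scale_net(y1))
```

Stacking such layers (alternating which half passes through) yields a bijection between image latents and poses, and the tractable change of variables is what enables the uncertainty estimates mentioned above.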